
    Hybrid Numerical-Analytical Scheme for Locally Inhomogeneous Elastic Waveguides

    Numerical simulation of guided wave excitation, propagation, and diffraction in laminate structures with local inhomogeneities (obstacles) is associated with high computational cost due to the need for a mesh-based approximation of extended domains with a rigorous account of the radiation conditions at infinity. To obtain computationally efficient solutions, hybrid numerical-analytical approaches are being developed that link a numerical solution in a local vicinity of the source and/or obstacles with an explicit analytical representation in the external semi-infinite domain. However, such methods have not become widespread because standard finite-element (FE) software generally does not provide the required coupling with an external multimode wave field. We propose a scheme (FEM-An) that allows the FE software to be used as a black box for the correct matching of local numerical and global analytical solutions. The FEM is used to obtain a set of local numerical solutions that serve as a basis in the inner domain. These solutions satisfy the boundary conditions induced by guided wave modes, so that they fit correctly with the modal expansion in the outer region. The expansion coefficients of both the FE and modal decompositions are then determined from the condition of stress and displacement continuity at the interface between the inner and outer domains. The scheme was numerically validated against analytical solutions of test problems and against FE solutions for long waveguide sections with perfectly matched layer absorbing conditions at the ends (FEM-PML). Along the way, it turned out that the FEM-PML approach gives incorrect results in the backward-wave bands and at high frequencies. The application of the FEM-An hybrid scheme is illustrated by examples of Lamb wave diffraction by elastic inclusions and delaminations.
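    The matching step described in the abstract — determining FE and modal expansion coefficients from stress and displacement continuity at the interface — can be sketched as a small least-squares problem. All names, dimensions, and the random "fields" below are illustrative assumptions, not the paper's actual formulation:

```python
import numpy as np

# Minimal sketch of the coefficient-matching step in a hybrid FEM-analytical
# scheme. Columns of U_fe / T_fe hold interface displacement and traction
# traces of precomputed FE basis solutions (inner domain); columns of
# U_mod / T_mod hold the same traces for the guided-wave modes (outer domain).
# Random matrices stand in for the actual solution traces.
rng = np.random.default_rng(0)
n_pts, n_fe, n_mod = 12, 4, 3            # interface sample points, basis sizes

U_fe = rng.normal(size=(n_pts, n_fe)); T_fe = rng.normal(size=(n_pts, n_fe))
U_mod = rng.normal(size=(n_pts, n_mod)); T_mod = rng.normal(size=(n_pts, n_mod))
u_inc = rng.normal(size=n_pts); t_inc = rng.normal(size=n_pts)  # incident field

# Continuity of displacement and traction across the interface:
#   U_fe @ a - U_mod @ b = u_inc,   T_fe @ a - T_mod @ b = t_inc
A = np.block([[U_fe, -U_mod], [T_fe, -T_mod]])
rhs = np.concatenate([u_inc, t_inc])
coeffs, *_ = np.linalg.lstsq(A, rhs, rcond=None)
a, b = coeffs[:n_fe], coeffs[n_fe:]      # FE and modal expansion coefficients
residual = float(np.linalg.norm(A @ coeffs - rhs))
```

    In the real scheme the modal traces come from the dispersion analysis of the waveguide and the system is typically enforced in a weak (integral) sense rather than pointwise.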

    Digital simulators of the random processes

    Universal digital simulators of random processes based on Markov models are proposed, capable of generating sample sequences of unlimited duration. It is shown that a simple Markov chain allows generation of random numbers with a specified two-dimensional (joint) probability distribution of neighboring values, while a doubly connected Markov model makes it possible to obtain random numbers with a specified three-dimensional distribution. The parameters of the model are determined either from a known probability density or from experimental samples of the simulated random process. The simulation algorithms do not require complex mathematical transformations and can be implemented with simple hardware components. To change the properties of the generated random processes, one only needs to reload the memory device with a precomputed data array. The block diagrams of the simulators are studied, and the probabilistic and correlation characteristics of the generated random processes are determined. It is established that these simulators ensure close agreement between the probability distributions of the selected model and the histograms of the generated sample sequences. Few published approaches match the efficiency of the proposed algorithms, given their straightforward hardware implementation (minimal computational cost) and the ease of reconfiguring the Markov-model simulators to generate new random processes. The simulators can be used in the design, development, and testing of multi-purpose electronic equipment, various meters, and devices for simulating radio paths.
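    The simple-Markov-chain idea can be sketched in a few lines: a specified joint distribution of neighboring samples is row-normalized into a transition matrix, from which an arbitrarily long sequence is generated. The quantization levels and the joint distribution below are illustrative assumptions, not values from the paper:

```python
import random

# Minimal sketch of a Markov-chain simulator of a discrete random process.
levels = [0.0, 1.0, 2.0]                     # quantized sample values
joint = [[0.20, 0.05, 0.05],                 # P(x_t = i, x_{t+1} = j),
         [0.05, 0.20, 0.05],                 # symmetric, so the marginal
         [0.05, 0.05, 0.30]]                 # distribution is stationary

# Row-normalize the joint distribution into transition probabilities.
trans = [[p / sum(row) for p in row] for row in joint]
marginal = [sum(row) for row in joint]       # start from the marginal

def simulate(n, seed=42):
    """Generate n samples whose neighboring pairs follow `joint`."""
    rng = random.Random(seed)
    state = rng.choices(range(len(levels)), weights=marginal)[0]
    out = [levels[state]]
    for _ in range(n - 1):
        state = rng.choices(range(len(levels)), weights=trans[state])[0]
        out.append(levels[state])
    return out

samples = simulate(10000)
```

    In a hardware implementation the `trans` table plays the role of the memory device mentioned in the abstract: reloading it reconfigures the simulator for a new random process.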

    Identification of distinguishing characteristics of intersections based on statistical analysis and data from video cameras

    The article discusses improving the collection of traffic information using video cameras and the statistical processing of the collected data. The aim is to identify the main patterns of traffic at congested intersections and to develop an analysis technique that improves traffic management at intersections. In modern conditions there is a sharp increase in the number of vehicles, which leads to negative consequences such as longer travel times, additional fuel consumption, and an increased risk of traffic accidents. To improve traffic control at intersections, a reliable data-collection system is needed, together with modern, effective methods of processing the collected information. A further goal of the article is to determine the traffic characteristics that most affect intersection throughput. As the throughput criterion, the actual number of cars passing during the green phase of the traffic light is taken. Using multivariate regression analysis, a model was developed to predict intersection throughput taking into account the most important traffic characteristics. An analysis of intersection throughput using fuzzy logic confirmed the correctness of the developed model. In addition, based on information collected at 20 intersections and comprising 597 observations, a methodology was developed for determining the similarity of intersections. This makes it possible to identify homogeneous types of intersections and to develop typical traffic management techniques for the city, instead of managing each node of the city's transport network individually. The results obtained lead to a significant reduction in the cost of organizing rational traffic flows. Document type: Article
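    A multivariate regression model of the kind described — predicting vehicles passed per green phase from traffic characteristics — can be sketched as follows. The feature names and the synthetic data are assumptions for illustration, not the article's 597 observations:

```python
import numpy as np

# Illustrative sketch: fit a multivariate linear regression predicting the
# number of vehicles passing an intersection per green phase.
rng = np.random.default_rng(1)
n = 200
green_s   = rng.uniform(20, 60, n)       # green-signal duration, seconds
lanes     = rng.integers(1, 4, n)        # number of approach lanes
queue_len = rng.uniform(0, 30, n)        # queued vehicles at green onset

# Synthetic "observed" throughput with measurement noise.
y = 0.4 * green_s + 3.0 * lanes + 0.2 * queue_len + rng.normal(0, 1.0, n)

# Ordinary least squares with an intercept column.
X = np.column_stack([np.ones(n), green_s, lanes, queue_len])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

def predict(green, n_lanes, queue):
    """Predicted vehicles per green phase for given traffic characteristics."""
    return float(beta @ [1.0, green, n_lanes, queue])
```

    With real observations, the significance of each coefficient would indicate which traffic characteristics most affect throughput, which is the selection step the article describes.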

    Relativistic Perturbation Theory Formalism to Computing Spectra and Radiation Characteristics: Application to Heavy Elements

    Fundamentals of gauge-invariant relativistic many-body perturbation theory (PT) with an optimized ab initio zeroth approximation for relativistic multi-electron systems are presented. The construction of an optimal one-electron representation is directly linked with the correct accounting of multielectron exchange-correlation effects and the fulfilment of the gauge-invariance principle in atomic calculations. The new approach to constructing the optimal PT zeroth approximation is based on accurately treating the lowest-order multielectron effects, in particular the gauge-dependent radiative contribution for a certain class of photon propagator gauges (for instance, the Coulomb, Feynman, and Babushkin gauges). This contribution is considered a typical representative of the important multielectron exchange-correlation effects, and its minimization is a reasonable criterion in the search for an optimal one-electron PT orbital basis. The procedure yields a clear benefit in routine many-body calculations, as it provides a way to refine atomic-characteristic calculations based on first principles. The relativistic density-functional approximation is taken as the zeroth one. All second-order exchange-correlation corrections and the dominant classes of higher-order diagrams (polarization interaction, quasiparticle screening, etc.) are taken into account, and a new form of the multi-electron polarization functional is used. As an illustration, results of computing energies and transition probabilities for some heavy ions are presented.

    Theoretical Spectroscopy of Rare-Earth Elements: Spectra and Autoionization Resonances

    An investigation of the spectra and of the radiative and autoionization characteristics of the rare-earth elements is of great interest both for the development of atomic spectroscopy and for various applications in plasma chemistry, astrophysics, laser physics, quantum electronics, etc. We present and review results of studying the spectra and autoionization resonance characteristics of two lanthanide elements, ytterbium and thulium. The spectra and autoionization resonance parameters are computed within relativistic many-body perturbation theory (RMBPT) and a generalized relativistic energy approach. Accurate results for the autoionization resonance energies and widths, with the exchange-correlation and relativistic corrections properly taken into account, are presented and compared with other available theoretical and experimental data. The chapter also gives a brief review of theoretical and experimental work on the spectroscopy of some lanthanide atoms. The spectroscopy of Rydberg autoionization resonances of rare-earth atoms in an external electromagnetic field is expected to be very complex and unusual.

    OpenAssistant Conversations -- Democratizing Large Language Model Alignment

    Aligning large language models (LLMs) with human preferences has proven to drastically improve usability and has driven rapid adoption, as demonstrated by ChatGPT. Alignment techniques such as supervised fine-tuning (SFT) and reinforcement learning from human feedback (RLHF) greatly reduce the skill and domain knowledge required to effectively harness the capabilities of LLMs, increasing their accessibility and utility across various domains. However, state-of-the-art alignment techniques like RLHF rely on high-quality human feedback data, which is expensive to create and often remains proprietary. In an effort to democratize research on large-scale alignment, we release OpenAssistant Conversations, a human-generated, human-annotated assistant-style conversation corpus consisting of 161,443 messages in 35 different languages, annotated with 461,292 quality ratings, resulting in over 10,000 complete and fully annotated conversation trees. The corpus is a product of a worldwide crowd-sourcing effort involving over 13,500 volunteers. Models trained on OpenAssistant Conversations show consistent improvements on standard benchmarks over their respective base models. We release our code and data under a fully permissive licence.
    Comment: Published in NeurIPS 2023 Datasets and Benchmarks Track

    Combined search for the quarks of a sequential fourth generation

    Results are presented from a search for a fourth generation of quarks produced singly or in pairs in a data set corresponding to an integrated luminosity of 5 inverse femtobarns recorded by the CMS experiment at the LHC in 2011. A novel strategy has been developed for a combined search for quarks of the up and down type in decay channels with at least one isolated muon or electron. Limits on the masses of the fourth-generation quarks and on the relevant Cabibbo-Kobayashi-Maskawa matrix elements are derived in the context of a simple extension of the standard model with a sequential fourth generation of fermions. The existence of mass-degenerate fourth-generation quarks with masses below 685 GeV is excluded at 95% confidence level for minimal off-diagonal mixing between the third- and fourth-generation quarks. For a mass difference of 25 GeV between the fourth-generation quark masses, the obtained mass limit shifts by about +/- 20 GeV. These results significantly reduce the allowed parameter space for a fourth generation of fermions.
    Comment: Replaced with published version. Added journal reference and DOI